53 research outputs found

    Learner-Friendly Kanji Learning System with Radical Analysis

    This paper presents a novel, learner-friendly Kanji learning system that uses radical analysis to enable foreign learners and elementary school students to study Kanji in an engaging and efficient way. Each character is analyzed radical by radical and divided into parts, and the learner's strokes are corrected for each divided part. The Radical Analysis Database (RAD), a database that analyzes characters by radical and divides them into parts, is used for this division. Strokes are corrected according to a threshold: a judgment value that learners can set freely by handling threshold bars on the interface. The system is further improved so that learners can set a separate threshold for each divided part; since each bar corresponds to one part, the system judges whether each part needs correction according to its threshold. Learners can therefore freely choose the radicals or parts on which they want intensive instruction, practice only the radicals or parts they find difficult, and more easily master difficult characters. In addition, an animation helps learners understand stroke order visually; because each stroke in the animation is displayed in a different color, learners can also see at a glance where each stroke begins and ends.
    DOI: http://dx.doi.org/10.11591/ijere.v1i1.47
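The per-part threshold judgment described above can be sketched in a few lines. This is a minimal illustration under assumed names (`judge_parts`, a mean per-point stroke distance); the paper's actual distance measure and data structures are not specified here.

```python
def judge_parts(written_parts, reference_parts, thresholds):
    """Decide, for each radical/part, whether it should be corrected.

    A part is flagged when its distance from the reference strokes
    exceeds the learner-set threshold bound to that part's bar.
    """
    results = []
    for written, reference, threshold in zip(written_parts, reference_parts, thresholds):
        # Mean per-point deviation between written and reference strokes.
        distance = sum(abs(w - r) for w, r in zip(written, reference)) / len(reference)
        results.append(distance > threshold)  # True => instruct this part
    return results

# A learner sets a strict threshold for the first part, a lenient one for the second.
flags = judge_parts(
    written_parts=[[1.0, 1.2, 0.8], [2.0, 2.1, 1.9]],
    reference_parts=[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]],
    thresholds=[0.05, 0.5],
)
```

Lowering a part's threshold makes the judgment stricter for that part, which is how a learner would focus practice on a weak radical.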

    On-line Signature Verification based on Pen Inclination and Pressure Information

    In this paper, features with personal characteristics derived from pen inclination and pressure information are discussed. Forging pen inclination and pressure information is difficult because they are not visible. Four features based on this invisible information are proposed and their characteristics are discussed. The proposed features, calculated by physical vector analysis, are verified on the SVC2004 database using a DP matching algorithm. As a result, the new feature, named Down, improves the recognition rate and reliability: the average correct verification rate is 94.57% and the variance is 0.667.
    DOI: http://dx.doi.org/10.11591/ijece.v2i4.47
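The DP matching used for verification can be illustrated with a standard dynamic-time-warping distance; the function below is a generic sketch over one-dimensional toy traces, not the paper's exact algorithm or feature set.

```python
import math

def dp_matching_distance(seq_a, seq_b):
    """Dynamic-programming (DTW) distance between two 1-D feature sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            # Best of insertion, deletion, and match transitions.
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

# Toy pen-pressure traces: a genuine reference, a second genuine signing, a forgery.
genuine = [0.1, 0.4, 0.8, 0.6]
trial = [0.1, 0.5, 0.8, 0.6]
forgery = [0.9, 0.2, 0.1, 0.7]
```

In a verification system, the DP distance to the reference would be compared against a decision threshold; genuine signings should yield a much smaller distance than forgeries.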

    A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges

    In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and environmentally friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI plays a role in enabling navigation, detecting and tracking objects, monitoring wildlife, enhancing precision agriculture, facilitating rescue operations, conducting surveillance activities, and establishing communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential for these technologies to revolutionise industries such as agriculture, surveillance practices, disaster management strategies, and more. While envisioning possibilities, it also takes a look at ethical considerations, safety concerns, regulatory frameworks to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.

    Automated analysis of sleep study parameters using signal processing and artificial intelligence.

    Automated sleep stage categorization, like other signal processing applications, must often contend with noise-contaminated EEG recordings. Denoising the contaminated signals is therefore essential to ensure a reliable analysis of the EEG signals. In this research work, empirical mode decomposition is used in combination with stacked autoencoders to conduct automatic sleep stage classification with reliable analytical performance. Because it decomposes a composite signal into several intrinsic mode functions, empirical mode decomposition offers an effective solution for denoising non-stationary signals such as EEG. Preliminary results showed that, through these intrinsic modes, a signal with a high signal-to-noise ratio can be obtained and used for further analysis with confidence. Accordingly, when statistical features were extracted from the denoised signals and classified using stacked autoencoders, improved results were obtained for Stage 1, Stage 2, Stage 3, Stage 4, and REM stage EEG signals using this combination.
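The reconstruct-then-featurize step can be sketched as below. The EMD sifting itself is assumed to have been done already (the intrinsic mode functions here are toy values), and the statistics shown are merely examples of the kind of features that could be fed to the stacked autoencoders.

```python
import statistics

def reconstruct_from_imfs(imfs, keep):
    """Sum the selected intrinsic mode functions to form a denoised signal."""
    length = len(imfs[0])
    return [sum(imfs[k][t] for k in keep) for t in range(length)]

def statistical_features(signal):
    """Simple per-epoch statistics of the kind used for classification."""
    return {
        "mean": statistics.fmean(signal),
        "std": statistics.pstdev(signal),
        "min": min(signal),
        "max": max(signal),
    }

imfs = [
    [0.5, -0.5, 0.5, -0.5],  # high-frequency IMF (treated as noise here)
    [1.0, 1.0, 2.0, 2.0],    # lower-frequency IMF retained for analysis
]
denoised = reconstruct_from_imfs(imfs, keep=[1])
feats = statistical_features(denoised)
```

Dropping the noise-dominated modes before summing is what raises the signal-to-noise ratio of the reconstructed epoch.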

    Exploring Internet of Things Adoption Challenges in Manufacturing Firms: A Fuzzy Analytical Hierarchy Process Approach

    Innovation is crucial for sustainable success in today's fiercely competitive global manufacturing landscape. Bangladesh's manufacturing sector must embrace transformative technologies like the Internet of Things (IoT) to thrive in this environment. This article addresses the vital task of identifying and evaluating barriers to IoT adoption in Bangladesh's manufacturing industry. Through synthesizing expert insights and carefully reviewing contemporary literature, we explore the intricate landscape of IoT adoption challenges. Our methodology combines the Delphi and Fuzzy Analytical Hierarchy Process, systematically analyzing and prioritizing these challenges. This approach harnesses expert knowledge and uses fuzzy logic to handle uncertainties. Our findings highlight key obstacles, with "Lack of top management commitment to new technology" (B10), "High initial implementation costs" (B9), and "Risks in adopting a new business model" (B7) standing out as significant challenges that demand immediate attention. These insights extend beyond academia, offering practical guidance to industry leaders. With the knowledge gained from this study, managers can develop tailored strategies, set informed priorities, and embark on a transformative journey toward leveraging IoT's potential in Bangladesh's industrial sector. This article provides a comprehensive understanding of IoT adoption challenges and equips industry leaders to navigate them effectively. This strategic navigation, in turn, enhances the competitiveness and sustainability of Bangladesh's manufacturing sector in the IoT era.
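As an illustration of the fuzzy AHP step, the sketch below applies Buckley's geometric-mean method to triangular fuzzy pairwise judgments. The numbers and the centroid defuzzification are assumptions for illustration, not the paper's exact procedure or data.

```python
import math

def fuzzy_ahp_weights(matrix):
    """Crisp priority weights from a triangular-fuzzy pairwise comparison matrix.

    Each entry is (l, m, u); Buckley's method takes the fuzzy geometric
    mean of each row, normalizes, then defuzzifies by the centroid.
    """
    n = len(matrix)
    geo = []
    for row in matrix:
        l = math.prod(t[0] for t in row) ** (1 / n)
        m = math.prod(t[1] for t in row) ** (1 / n)
        u = math.prod(t[2] for t in row) ** (1 / n)
        geo.append((l, m, u))
    total_l = sum(g[0] for g in geo)
    total_m = sum(g[1] for g in geo)
    total_u = sum(g[2] for g in geo)
    crisp = []
    for l, m, u in geo:
        # Fuzzy weight division uses the reversed bounds of the total.
        wl, wm, wu = l / total_u, m / total_m, u / total_l
        crisp.append((wl + wm + wu) / 3)  # centroid defuzzification
    s = sum(crisp)
    return [w / s for w in crisp]

# Two barriers: the first judged roughly (2, 3, 4) times as important as the second.
weights = fuzzy_ahp_weights([
    [(1, 1, 1), (2, 3, 4)],
    [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)],
])
```

The triangular bounds let each expert express uncertainty in a pairwise judgment, which is the point of combining AHP with fuzzy logic.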

    Big Data - Supply Chain Management Framework for Forecasting: Data Preprocessing and Machine Learning Techniques

    This article systematically identifies and comparatively analyzes state-of-the-art supply chain (SC) forecasting strategies and technologies. A novel framework is proposed that incorporates Big Data Analytics in SC Management (problem identification, data sources, exploratory data analysis, machine-learning model training, hyperparameter tuning, performance evaluation, and optimization) and the effects of forecasting on the human workforce, inventory, and the overall SC. Initially, the need to collect data according to SC strategy, and how to collect them, is discussed. The article then discusses the need for different types of forecasting according to the period or SC objective. SC KPIs and error-measurement systems are recommended for optimizing the top-performing model. The adverse effects of phantom inventory on forecasting are illustrated, as is the dependence of managerial decisions on the SC KPIs for determining model performance parameters and improving operations management, transparency, and planning efficiency. The cyclic connection within the framework introduces preprocessing optimization based on the post-process KPIs, optimizing the overall control process (inventory management, workforce determination, cost, production, and capacity planning). The contribution of this research lies in the proposed standard SC process framework, the recommended forecasting data analysis, the forecasting effects on SC performance, the machine-learning algorithm optimization followed, and in shedding light on future research.
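Two of the error measures such a KPI-driven framework might use can be sketched directly; the demand series and model outputs below are invented for illustration, and the metric names are generic rather than the article's own.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, a common SC forecast KPI."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def forecast_bias(actual, forecast):
    """Positive bias = systematic over-forecasting (a driver of phantom inventory)."""
    return sum(f - a for a, f in zip(actual, forecast)) / len(actual)

demand = [100, 120, 110, 130]
model_a = [98, 125, 108, 128]
model_b = [120, 140, 130, 150]  # systematically over-forecasts

# KPI-based selection of the top-performing model.
best = min([("A", model_a), ("B", model_b)], key=lambda kv: mape(demand, kv[1]))
```

Tracking bias alongside accuracy is what lets a manager catch over-forecasting before it inflates inventory.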

    QAmplifyNet: Pushing the Boundaries of Supply Chain Backorder Prediction Using Interpretable Hybrid Quantum-Classical Neural Network

    Supply chain management relies on accurate backorder prediction for optimizing inventory control, reducing costs, and enhancing customer satisfaction. However, traditional machine-learning models struggle with large-scale datasets and complex relationships, hindering real-world data collection. This research introduces a novel methodological framework for supply chain backorder prediction, addressing the challenge of handling large datasets. Our proposed model, QAmplifyNet, employs quantum-inspired techniques within a quantum-classical neural network to predict backorders effectively on short and imbalanced datasets. Experimental evaluations on a benchmark dataset demonstrate QAmplifyNet's superiority over classical models, quantum ensembles, quantum neural networks, and deep reinforcement learning. Its proficiency in handling short, imbalanced datasets makes it an ideal solution for supply chain management. To enhance model interpretability, we use Explainable Artificial Intelligence techniques. Practical implications include improved inventory control, reduced backorders, and enhanced operational efficiency. QAmplifyNet seamlessly integrates into real-world supply chain management systems, enabling proactive decision-making and efficient resource allocation. Future work involves exploring additional quantum-inspired techniques, expanding the dataset, and investigating other supply chain applications. This research unlocks the potential of quantum computing in supply chain optimization and paves the way for further exploration of quantum-inspired machine learning models in supply chain management. Our framework and QAmplifyNet model offer a breakthrough approach to supply chain backorder prediction, providing superior performance and opening new avenues for leveraging quantum-inspired techniques in supply chain management.
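QAmplifyNet's internals are not detailed in this abstract, but one common ingredient of quantum-inspired models is amplitude encoding, where a classical feature vector is L2-normalized so its entries could serve as the amplitudes of a quantum state. The sketch below shows only that generic step, not the paper's model.

```python
import math

def amplitude_encode(features):
    """L2-normalize a feature vector into candidate quantum-state amplitudes."""
    norm = math.sqrt(sum(x * x for x in features))
    return [x / norm for x in features]

# A toy two-feature backorder record becomes a unit-norm amplitude vector.
state = amplitude_encode([3.0, 4.0])
```

The squared amplitudes sum to one, mirroring the probability constraint on a quantum state.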

    A Neural Attention-Based Encoder-Decoder Approach for English to Bangla Translation

    Machine translation (MT) is the process of translating text from one language to another using bilingual data sets and grammatical rules. Recent works in the field of MT have popularized sequence-to-sequence models leveraging neural attention and deep learning. The success of neural attention models is yet to be translated into a robust framework for automated English-to-Bangla translation due to the lack of a comprehensive dataset that encompasses the diverse vocabulary of the Bangla language. In this study, we propose an English-to-Bangla MT system based on an encoder-decoder attention model trained on the CCMatrix corpus. Our method shows that this model can outperform traditional SMT and RBMT models with a Bilingual Evaluation Understudy (BLEU) score of 15.68 despite being constrained by the limited vocabulary of the corpus. We hypothesize that this model can be used successfully for state-of-the-art machine translation with a more diverse and accurate dataset. This work can be extended further to incorporate several newer datasets using transfer learning techniques.
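The BLEU score cited above can be made concrete with a simplified single-reference, sentence-level implementation; real evaluations typically use a corpus-level, smoothed variant, so this sketch is illustrative only.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with brevity penalty (single reference)."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        # Clipped n-gram overlap, as in modified precision.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(sum(log_precisions) / max_n)

reference = "the cat sits on the mat".split()
hypothesis = "the cat sits on the mat".split()
score = bleu(hypothesis, reference)
```

A perfect hypothesis scores 1.0; published scores such as 15.68 are this quantity scaled to 0-100 and averaged over a test corpus.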

    Gearbox fault diagnosis using improved feature representation and multitask learning

    A gearbox is a critical rotating component used to transmit torque from one shaft to another. This paper presents a data-driven gearbox fault diagnosis system that addresses the issue of variable working conditions, namely uneven speed and load of the machinery. Moreover, it shows how an improved feature extraction process and data from multiple tasks can contribute to the overall performance of a fault diagnosis model. Variable working conditions make gearbox fault diagnosis a challenging task, and the performance of existing algorithms in the literature deteriorates under them. In this paper, a refined feature extraction technique and multitask learning are adopted to address this variability. The feature extraction step helps to explore unique fault signatures that support gearbox fault diagnosis under uneven speed and load conditions. The extracted features are then provided to a convolutional neural network (CNN)-based multitask learning (MTL) network to identify the faults in the provided gearbox dataset. A comparison of the experimental results of the proposed model with those of several published state-of-the-art diagnostic techniques suggests the superiority of the proposed model under uneven speed and load conditions. Therefore, based on the results, the proposed approach can be used for gearbox fault diagnosis under uneven speed and load conditions.
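Hard parameter sharing, the usual backbone of CNN-based multitask learning, can be sketched as one shared transform feeding task-specific heads. All weights, shapes, and head roles below are toy values chosen for illustration, not the paper's architecture.

```python
def shared_trunk(x, w_shared):
    """Shared feature transform (a linear map with ReLU), computed once per input."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w_shared]

def head(features, w_head):
    """Task-specific linear head over the shared features."""
    return [sum(wi * fi for wi, fi in zip(row, features)) for row in w_head]

x = [0.5, -1.0, 2.0]                 # extracted vibration features (toy)
w_shared = [[1.0, 0.0, 0.5], [0.0, 1.0, 1.0]]
w_fault = [[1.0, 0.0], [0.0, 1.0]]   # head 1: fault-type logits
w_sever = [[0.5, 0.5]]               # head 2: severity score

h = shared_trunk(x, w_shared)        # computed once, reused by both tasks
fault_logits = head(h, w_fault)
severity = head(h, w_sever)
```

Because both heads backpropagate into the same trunk during training, each task's data regularizes the features the other task uses, which is the mechanism behind the multitask gains described above.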

    Korean Sign Language Recognition Using Transformer-Based Deep Neural Network

    Sign language recognition (SLR) is one of the crucial applications of the hand gesture recognition and computer vision research domain. Many researchers have been working to develop hand gesture-based SLR applications for English, Turkish, Arabic, and other sign languages. However, few studies have been conducted on Korean Sign Language (KSL) classification because few KSL datasets are publicly available. In addition, existing KSL recognition work still struggles to perform efficiently because light illumination and background complexity are the major problems in this field. In the last decade, researchers have successfully applied vision-based transformers to recognize sign language by extracting long-range dependencies within the image. However, there is a significant gap between CNNs and transformers in terms of model performance and efficiency, and we have not yet found a combined CNN- and transformer-based Korean sign language recognition model. To overcome these challenges, we propose a convolution- and transformer-based multi-branch network that takes advantage of the transformer's long-range dependency computation and the CNN's local feature calculation for sign language recognition. We extract initial features with the grained model and then extract features from the transformer and CNN branches in parallel. After concatenating the local and long-range dependency features, a new classification module is applied for classification. We evaluated the proposed model on a KSL benchmark dataset and our lab dataset, where it achieved 89.00% accuracy on the 77-label KSL dataset and 98.30% accuracy on the lab dataset. The higher performance proves that the proposed model can achieve a generalized property with considerably less computational cost.
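The transformer branch's long-range dependency computation is scaled dot-product attention, sketched here in plain Python. The toy queries, keys, and values are illustrative only; a real model would compute them from image features and run many heads in parallel.

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes all values by similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        peak = max(scores)
        exps = [math.exp(s - peak) for s in scores]  # numerically stable softmax
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

queries = [[1.0, 0.0]]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
attended = attention(queries, keys, values)
```

Because every query attends to every key, distant image regions can influence each other in one step; concatenating such attended features with CNN-derived local features is the multi-branch idea described above.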